Fenchel Dual Gradient Methods for Distributed Convex Optimization Over Time-Varying Networks
Authors
Abstract
Similar resources
Universal gradient methods for convex optimization problems
In this paper, we present new methods for black-box convex minimization. They do not need to know in advance the actual level of smoothness of the objective function. Their only essential input parameter is the required accuracy of the solution. At the same time, for each particular problem class they automatically ensure the best possible rate of convergence. We confirm our theoretical results...
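To make "no smoothness level needed in advance" concrete, here is a minimal sketch of one such smoothness-adaptive (universal) scheme, in the spirit of this abstract but not necessarily the authors' exact method: the local Lipschitz estimate L is found by backtracking against a quadratic model relaxed by the target accuracy eps, which is the only essential input. All names and defaults below are illustrative assumptions.

    import numpy as np

    def universal_gradient_method(f, grad, x0, eps, L0=1.0, max_iter=500):
        """Smoothness-adaptive gradient descent: only the target accuracy
        eps is supplied; the 'Lipschitz' estimate L is backtracked, so the
        true smoothness of f is never needed in advance."""
        x, L = np.asarray(x0, dtype=float), L0
        for _ in range(max_iter):
            g = grad(x)
            while True:
                x_new = x - g / L                 # candidate gradient step
                # Accept if the quadratic model holds up to eps/2 slack.
                model = (f(x) + g @ (x_new - x)
                         + 0.5 * L * np.linalg.norm(x_new - x) ** 2 + 0.5 * eps)
                if f(x_new) <= model:
                    break
                L *= 2.0                          # model violated: raise L
            x, L = x_new, L / 2.0                 # let L shrink again
        return x

    # Example: minimize the smooth convex quadratic f(x) = ||x||^2.
    x_min = universal_gradient_method(lambda x: x @ x, lambda x: 2 * x,
                                      x0=np.ones(5), eps=1e-6)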
Distributed Convex Optimization with Inequality Constraints over Time-varying Unbalanced Digraphs
This paper considers a distributed convex optimization problem with inequality constraints over time-varying unbalanced digraphs, where the cost function is a sum of local objectives and each node of the graph knows only its own local objective and inequality constraints. Although there is a vast literature on distributed optimization, most existing works require the graph to be balanced, which is quite ...
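For orientation, the problem class described in this snippet can be written in one common form (the paper's exact formulation may differ) as

\[
\min_{x \in \mathbb{R}^d} \ \sum_{i=1}^{n} f_i(x)
\qquad \text{s.t.} \quad g_i(x) \le 0, \quad i = 1, \dots, n,
\]

where node $i$ has access only to its own $f_i$ and $g_i$ and communicates over a time-varying directed graph whose weight matrices need not be doubly stochastic (the "unbalanced" case).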
An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization
We propose a distributed first-order augmented Lagrangian (DFAL) algorithm to minimize the sum of composite convex functions, where each term in the sum is a private cost function belonging to a node, and only nodes connected by an edge can directly communicate with each other. This optimization model abstracts a number of applications in distributed sensing and machine learning. We show that a...
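For intuition only (this is not DFAL itself, whose augmented-Lagrangian updates are more involved), a single synchronous round of a generic distributed proximal-gradient scheme over a graph might look like the sketch below; the choice of an l1 nonsmooth term and all names are assumptions.

    import numpy as np

    def prox_l1(v, t):
        """Proximal operator of t*||.||_1 (soft-thresholding)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def distributed_round(x, grad_f, neighbors, step, lam):
        """One synchronous round: x maps node -> iterate, grad_f maps
        node -> gradient oracle of its private smooth cost, neighbors maps
        node -> list of adjacent nodes. Each node mixes with its neighbors
        (consensus averaging) and then takes a local proximal-gradient step
        on its own composite cost."""
        x_new = {}
        for i, xi in x.items():
            avg = np.mean([x[j] for j in neighbors[i]] + [xi], axis=0)
            x_new[i] = prox_l1(avg - step * grad_f[i](xi), step * lam)
        return x_new

Repeating such rounds drives the iterates toward consensus while each node only ever touches its private cost, which is the communication pattern the abstract describes.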
An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization
$\bar{x}_i^* = \bar{x}_i$ when $\|\nabla_{x_i} f(\bar{x})\|_2 \le \lambda B_i$, it follows that $\bar{x}_i^* = \bar{x}_i$ if and only if $\|\nabla_{x_i} f(\bar{x})\|_2 \le \lambda B_i$. Hence, $h_i(\bar{x}_i^*) = 0$. Case 2: Suppose that $i \in \mathcal{I}^c := \mathcal{N} \setminus \mathcal{I}$, i.e., $\|\nabla_{x_i} f(\bar{x})\|_2 > \lambda B_i$. In this case, $\bar{x}_i^* \ne \bar{x}_i$. From the first-order optimality condition, we have $\nabla_{x_i} f(\bar{x}) + L_i(\bar{x}_i^* - \bar{x}_i) + \lambda B_i \frac{\bar{x}_i^* - \bar{x}_i}{\|\bar{x}_i^* - \bar{x}_i\|_2} = 0$. Let $s_i := \frac{\bar{x}_i^* - \bar{x}_i}{\|\bar{x}_i^* - \bar{x}_i\|_2}$ and $t_i := \|\bar{x}_i^* - \bar{x}_i\|_2$; then $s_i = \frac{-\nabla_{x_i} f(\bar{x})}{L_i t_i + \lambda B_i}$. Since $\|s_i\|_2 = 1$, ...
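The algebra in this snippet pins down the closed-form solution of the node-wise subproblem: with $g := \nabla_{x_i} f(\bar{x})$, the condition $\|s_i\|_2 = 1$ forces $L_i t_i + \lambda B_i = \|g\|_2$, hence $d^* := \bar{x}_i^* - \bar{x}_i = -\frac{\|g\|_2 - \lambda B_i}{L_i}\,\frac{g}{\|g\|_2}$ whenever $\|g\|_2 > \lambda B_i$. As a quick sanity check (my own, with hypothetical names), one can verify numerically that this $d^*$ minimizes $g^\top d + \tfrac{L}{2}\|d\|_2^2 + \lambda B_i\|d\|_2$:

    import numpy as np

    # Sanity check (illustrative): for ||g||_2 > lam_B the minimizer of
    #   phi(d) = g.d + (L/2)*||d||_2^2 + lam_B*||d||_2
    # should be d* = -((||g||_2 - lam_B)/L) * g/||g||_2, exactly the value
    # obtained from the quoted optimality condition.
    rng = np.random.default_rng(0)
    g = rng.normal(size=4)
    L = 2.0
    lam_B = 0.5 * np.linalg.norm(g)               # guarantees ||g||_2 > lam_B

    phi = lambda d: g @ d + 0.5 * L * (d @ d) + lam_B * np.linalg.norm(d)
    d_star = -((np.linalg.norm(g) - lam_B) / L) * g / np.linalg.norm(g)

    # phi is strictly convex, so d_star is globally optimal iff no small
    # perturbation improves it.
    for _ in range(1000):
        assert phi(d_star) <= phi(d_star + 1e-3 * rng.normal(size=4)) + 1e-12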
Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization
We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term. We show that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the s...
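To see what "an error in the gradient" means operationally, here is a minimal sketch (my construction, not the paper's algorithm) of a proximal-gradient loop on a lasso-type instance where the gradient is perturbed by noise decaying like $1/k^2$; the paper's analysis concerns how fast such errors must vanish for the usual convergence rates to be preserved.

    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t*||.||_1."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def inexact_proximal_gradient(A, b, lam, iters=200, seed=0):
        """Proximal gradient for  min 0.5*||Ax - b||^2 + lam*||x||_1,
        with a deliberately noisy gradient whose error decays like 1/k^2."""
        rng = np.random.default_rng(seed)
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L for the smooth part
        for k in range(1, iters + 1):
            g = A.T @ (A @ x - b)
            g += rng.normal(size=x.shape) / k ** 2  # decaying gradient error
            x = soft_threshold(x - step * g, step * lam)
        return x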
Journal
Journal title: IEEE Transactions on Automatic Control
Year: 2019
ISSN: 0018-9286, 1558-2523, 2334-3303
DOI: 10.1109/tac.2019.2901829